
    Losing control: the case for emergent software systems using autonomous assembly, perception and learning

    Architectural self-organisation, in which different configurations of software modules are dynamically assembled based on the current context, has been shown to be an effective way for software to self-optimise over time. Current approaches to this rely heavily on human-led definitions: models, policies and processes to control how self-organisation works. We present the case for a paradigm shift to fully emergent computer software, which places the burden of understanding entirely into the hands of the software itself. These systems are autonomously assembled at runtime from discovered constituent parts, and their internal health and external deployment environment are continually monitored. An online, unsupervised learning system then uses runtime adaptation to explore alternative system assemblies and locate optimal solutions. Based on our experience to date, we define the problem space of emergent software, and we present a working case study of an emergent web server. Our results demonstrate two aspects of the problem space for this case study: that different assemblies of behaviour are optimal in different deployment environment conditions; and that these assemblies can be autonomously learned from generalised perception data while the system is online.
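
    A minimal sketch of the kind of online exploration this abstract describes, not the authors' implementation: an epsilon-greedy loop that keeps per-environment reward averages for a handful of candidate assemblies. The assembly names, the reward signal and the perception-derived condition label are illustrative assumptions.

```python
# Hypothetical sketch: epsilon-greedy exploration over alternative assemblies,
# keyed by a perceived deployment-environment condition. Names and rewards are
# illustrative, not taken from the paper.
import random
from collections import defaultdict

ASSEMBLIES = ["cache+compression", "cache-only", "compression-only", "plain"]
EPSILON = 0.1  # fraction of decisions spent exploring

totals = defaultdict(float)   # summed reward per (condition, assembly)
counts = defaultdict(int)     # observations per (condition, assembly)

def choose_assembly(condition):
    """Pick an assembly to deploy for the perceived condition."""
    if random.random() < EPSILON:
        return random.choice(ASSEMBLIES)                      # explore
    scored = [(totals[(condition, a)] / max(counts[(condition, a)], 1), a)
              for a in ASSEMBLIES]
    return max(scored)[1]                                     # exploit best so far

def observe(condition, assembly, reward):
    """Fold a measured reward (e.g. inverse response time) back into the averages."""
    totals[(condition, assembly)] += reward
    counts[(condition, assembly)] += 1
```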

    Real-time power cycling in video on demand data centres using online Bayesian prediction

    Energy usage in data centres continues to be a major and growing concern as an increasing number of everyday services depend on these facilities. Research in this area has examined topics including power smoothing using batteries and deep learning to control cooling systems, in addition to optimisation techniques for the software running inside data centres. We present a novel real-time power-cycling architecture, supported by a media distribution approach and online prediction model, to automatically determine when servers are needed based on demand. We demonstrate through experimental evaluation that this approach can save up to 31% of server energy in a cluster. Our evaluation is conducted on typical rack mount servers in a data centre testbed and uses a recent real-world workload trace from the BBC iPlayer, an extremely popular video on demand service in the UK.
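
    The prediction model is not detailed in this abstract; the sketch below assumes a simple conjugate Gamma-Poisson estimate of per-interval demand, updated online, with a hypothetical per-server capacity and headroom factor used to decide how many servers to keep powered.

```python
# Hypothetical sketch only: online Gamma-Poisson demand estimate driving a
# power-cycling decision. Capacity and headroom values are assumptions.
import math

class OnlineDemandModel:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta   # Gamma prior over requests per interval

    def update(self, observed_requests):
        # Conjugate update of the Gamma posterior after one Poisson observation.
        self.alpha += observed_requests
        self.beta += 1.0

    def predicted_rate(self):
        return self.alpha / self.beta         # posterior mean requests per interval

REQUESTS_PER_SERVER = 500                     # assumed per-interval server capacity

def servers_needed(model, headroom=1.2):
    """Servers to keep powered for the predicted demand plus a safety margin."""
    return max(1, math.ceil(model.predicted_rate() * headroom / REQUESTS_PER_SERVER))
```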

    REX: a development platform and online learning approach for runtime emergent software systems

    Conventional approaches to self-adaptive software architectures require human experts to specify models, policies and processes by which software can adapt to its environment. We present REX, a complete platform and online learning approach for runtime emergent software systems, in which all decisions about the assembly and adaptation of software are machine-derived. REX is built with three major, integrated layers: (i) a novel component-based programming language called Dana, enabling discovered assembly of systems and very low-cost adaptation of those systems for dynamic re-assembly; (ii) a perception, assembly and learning framework (PAL) built on Dana, which abstracts emergent software into configurations and perception streams; and (iii) an online learning implementation based on a linear bandit model, which helps solve the search space explosion problem inherent in runtime emergent software. Using an emergent web server as a case study, we show how software can be autonomously self-assembled from discovered parts, and continually optimised over time (by using alternative parts) as it is subjected to different deployment conditions. Our system begins with no knowledge that it is specifically assembling a web server, nor with knowledge of the deployment conditions that may occur at runtime.
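
    To illustrate the linear bandit idea, the sketch below assumes each candidate assembly is described by a feature vector of the components it contains, so reward estimates generalise across assemblies that share parts (a LinUCB-style rule); REX's actual model may differ.

```python
# Illustrative LinUCB-style linear bandit over assemblies; not REX's exact model.
import numpy as np

class LinearBandit:
    def __init__(self, n_features, alpha=1.0):
        self.alpha = alpha               # exploration weight
        self.A = np.eye(n_features)      # ridge design matrix
        self.b = np.zeros(n_features)    # accumulated reward-weighted features

    def select(self, candidates):
        """candidates: array of shape (n_candidates, n_features); returns an index."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x) for x in candidates]
        return int(np.argmax(scores))

    def update(self, features, reward):
        self.A += np.outer(features, features)
        self.b += reward * features
```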

    Defining emergent software using continuous self-assembly, perception and learning

    Architectural self-organisation, in which different configurations of software modules are dynamically assembled based on the current context, has been shown to be an effective way for software to self-optimise over time. Current approaches to this rely heavily on human-led definitions: models, policies, and processes to control how self-organisation works. We present the case for a paradigm shift to fully emergent computer software that places the burden of understanding entirely into the hands of the software itself. These systems are autonomously assembled at runtime from discovered constituent parts, and their internal health and external deployment environment are continually monitored. An online, unsupervised learning system then uses runtime adaptation to continuously explore alternative system assemblies and locate optimal solutions. Based on our experience over the past three years, we define the problem space of emergent software and present a working case study of an emergent web server as a concrete example of the paradigm. Our results demonstrate two main aspects of the problem space for this case study: that different assemblies of behaviour are optimal in different deployment environment conditions, and that these assemblies can be autonomously learned from generalised perception data while the system is online.
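
    As a companion to the exploration sketch given for the earlier listing, the fragment below illustrates one way "generalised perception data" might be reduced to a coarse environment label that keys the learner; the metric names and thresholds are purely hypothetical.

```python
# Hypothetical sketch: bucket raw runtime metrics into a coarse environment label.
def environment_label(metrics):
    """metrics example: {'avg_request_size_kb': 40,
                         'cacheable_fraction': 0.8,
                         'concurrent_clients': 120}"""
    size = "large" if metrics["avg_request_size_kb"] > 100 else "small"
    cache = "cacheable" if metrics["cacheable_fraction"] > 0.5 else "unique"
    load = "busy" if metrics["concurrent_clients"] > 100 else "quiet"
    return f"{size}/{cache}/{load}"   # e.g. "small/cacheable/busy"
```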

    On using micro-clouds to deliver the fog

    The cloud is scalable and cost-efficient, but it is not ideal for hosting all applications. Fog computing proposes an alternative: offloading some computation to the edge. Which applications to offload, where to, and when is not yet entirely clear, partly because potential edge infrastructures remain poorly understood. Through a number of experiments, we showcase the feasibility and readiness of micro-clouds formed from collections of Raspberry Pis to host a range of fog applications, particularly in network-constrained environments.

    Improving Spark Application Throughput via Memory-Aware Task Co-location: A Mixture of Experts Approach

    Data analytic applications built upon big data processing frameworks such as Apache Spark are an important class of applications. Many of these applications are not latency-sensitive and thus can run as batch jobs in data centers. By running multiple applications on a computing host, task co-location can significantly improve server utilization and system throughput. However, effective task co-location is a non-trivial task, as it requires an understanding of the computing resource requirements of the co-running applications in order to determine what tasks, and how many of them, can be co-located. State-of-the-art co-location schemes either require the user to supply the resource demands, which are often far beyond what is needed, or use a one-size-fits-all function to estimate the requirement, which is unlikely to capture the diverse behaviors of applications. In this paper, we present a mixture-of-experts approach to model the memory behavior of Spark applications. We achieve this by learning, offline, a range of specialized memory models on a range of typical applications; we then determine at runtime which of the memory models, or experts, best describes the memory behavior of the target application. We show that by accurately estimating the resource level needed, a co-location scheme can effectively determine how many applications can be co-located on the same host to improve system throughput, taking into consideration the memory and CPU requirements of co-running application tasks. Our technique is applied to a set of representative data analytic applications built upon the Apache Spark framework. We evaluated our approach for system throughput and average normalized turnaround time on a multi-core cluster. Our approach achieves over 83.9% of the performance delivered using an ideal memory predictor. We obtain, on average, an 8.69x improvement in system throughput and a 49% reduction in turnaround time over executing application tasks in isolation, which translates to 1.28x and 1.68x improvements over a state-of-the-art co-location scheme for system throughput and turnaround time, respectively.
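
    A minimal sketch of the mixture-of-experts selection step, under assumptions not taken from the paper: experts are simple offline-fitted footprint functions, the best expert is picked by its error on a short profiling window, and co-location slots follow from a hypothetical host memory budget.

```python
# Hypothetical sketch of runtime expert selection and co-location sizing.
HOST_MEMORY_GB = 64                            # assumed host memory budget

# Assumed offline-trained experts: input size (GB) -> predicted peak memory (GB).
EXPERTS = {
    "in-memory-iterative": lambda gb: 3.0 * gb + 2.0,
    "streaming-scan":      lambda gb: 0.4 * gb + 1.0,
    "shuffle-heavy":       lambda gb: 1.5 * gb + 4.0,
}

def pick_expert(profile):
    """profile: list of (input_gb, observed_peak_gb) pairs from a short probe run."""
    def error(expert):
        return sum(abs(expert(x) - y) for x, y in profile)
    return min(EXPERTS.values(), key=error)

def co_location_slots(expert, input_gb):
    """How many instances of this workload fit on one host, memory-wise."""
    return max(1, int(HOST_MEMORY_GB // expert(input_gb)))
```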

    Hierarchical Self-awareness and Authority for Scalable Self-integrating Systems

    System self-integration from open sets of components provides the basis for open adaptability to unpredictable environments. Hierarchical architectures are essential for enabling such systems to scale, as they allow a compromise between processing detailed knowledge in parallel and coordinating parallel processes from a more abstract viewpoint, recursively. This position paper aims to bring to the fore a key design aspect of such hierarchical systems: how should the authority of decision and action be assigned across hierarchical levels, with respect to the self-awareness capabilities of those levels? The difficulty lies in the fact that every level lacks knowledge that may be key to certain decisions: lower levels have detailed knowledge but within a narrow scope (good for local customisation), while higher levels have a broader scope but no details (good for global coordination). We highlight the most obvious authority schemes available and discuss their advantages and shortcomings: top-down, bottom-up, and iterative (yoyo). We discuss three detailed application examples from our previous work on hierarchical systems, pointing out the knowledge and authority schemes employed and the possible alternatives. This provides a basis for offering system designers the necessary understanding and tools for taking appropriate decisions with respect to the distribution of self-awareness capabilities and authority of decision and action across hierarchical system levels.
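
    The toy fragment below contrasts the two extreme authority schemes mentioned (top-down and bottom-up) on a simple node hierarchy; the node structure and the "setting" semantics are invented for illustration, and the iterative (yoyo) scheme would alternate the two passes.

```python
# Hypothetical illustration of top-down vs bottom-up authority in a hierarchy.
class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def local_preference(self):
        # Detailed but narrow local view (good for local customisation).
        return {"node": self.name, "setting": "local-best"}

def top_down(node, global_setting):
    # Higher level holds authority; lower levels inherit its decision.
    decisions = [{"node": node.name, "setting": global_setting}]
    for child in node.children:
        decisions += top_down(child, global_setting)
    return decisions

def bottom_up(root):
    # Lower levels hold authority; the higher level only aggregates their choices.
    return [c.local_preference() for c in root.children] + \
           [{"node": root.name, "setting": "aggregate"}]
```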

    Towards Emergent Microservices for Client-Tailored Design

    Contemporary systems are increasingly complex, with both large codebases and constantly changing environments that make them challenging to develop, deploy and manage. We consider two recent efforts to tackle this complexity: microservices and emergent software. Microservices have gained popularity in industry, where software monoliths are broken down into compositions of single-objective, end-to-end services communicating over HTTP that can be scaled out on cloud hosting systems. From the research community, the emergent systems concept demonstrates promise in using real-time learning to autonomously compose and optimise software systems from small building blocks, rapidly finding the best behavioural composition to match the current deployment conditions. We argue that emergent software and microservice architectures have strong potential for synergy in complex systems, offering mutually compatible lessons in dealing with complexity via scale-out design and real-time client-tailored behaviour. We explore self-designing microservices, built with emergent software, to demonstrate the complementary boundaries of both concepts, and how future intersections may offer novel architectures that lie at a compelling point between human- and machine-designed systems. We present the conceptual synergy and demonstrate a specific microservice architecture for a smart city example in which scoped microservices are continually self-composed according to the demands of the applications and operating environment. To support reproducibility, we make available all the code used in the evaluation of the proposed approach.
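
    A small sketch of the client-tailored composition idea, with invented component variants and thresholds (not the paper's smart-city services): each request scope gets its own composition, re-derived as the observed conditions change.

```python
# Hypothetical sketch: derive a per-scope microservice composition from
# observed conditions. Variant names and thresholds are illustrative only.
def compose_for(scope, conditions):
    """conditions example: {'update_rate_hz': 20, 'dataset_mb': 300,
                            'repeat_query_fraction': 0.7}"""
    return {
        "scope": scope,
        "ingest":  "ingest-streaming" if conditions["update_rate_hz"] > 10 else "ingest-batched",
        "storage": "storage-memory" if conditions["dataset_mb"] < 512 else "storage-disk",
        "query":   "query-precomputed" if conditions["repeat_query_fraction"] > 0.5 else "query-on-demand",
    }
```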

    Partner-assisted emotional disclosure for patients with gastrointestinal cancer: Results from a randomized controlled trial

    For patients with cancer who are married or in an intimate relationship, the relationship with their partner plays a critical role in their adaptation to the illness. However, cancer patients and their partners often have difficulty talking with each other about their cancer-related concerns. Difficulties in communication may ultimately compromise both the patient-partner relationship and the patient's psychological adjustment. The present study tested the efficacy of a novel partner-assisted emotional disclosure intervention in a sample of patients with gastrointestinal (GI) cancer.

    The Stakes in Bayh-Dole: Public Values Beyond the Pace of Innovation

    Evaluation studies of the Bayh-Dole Act are generally concerned with the pace of innovation or with transgressions against the independence of research. While these concerns are important, I propose here to expand the range of public values considered in assessing Bayh-Dole and formulating future reforms. To this end, I first examine the changes in the terms of the Bayh-Dole debate and the drift in its design. Neoliberal ideas have had a definitive influence on U.S. innovation policy for the last thirty years, including legislation to strengthen patent protection. Moreover, the neoliberal policy agenda is articulated and justified in the interest of “competitiveness.” Rhetorically, this agenda equates competitiveness with economic growth, and economic growth with the public interest. Against that backdrop, I use Public Value Failure criteria to show that values such as political equality, transparency, and fairness in the distribution of the benefits of innovation are worth considering to counter the “policy drift” of Bayh-Dole.